I Locked My GPU Clocks And Now It Runs Forever
I have an RTX 5090 OC LC edition. Liquid cooled. Overclocked out of the box. It is the kind of card that makes people ask uncomfortable questions about my financial decisions. I have no good answers.
I also locked the clocks. Manually. Fixed core. Fixed memory. No boost variability. No power throttling. Just pure consistent frequencies. My training runs do not fluctuate anymore. They run at the same speed from start to finish. It is beautiful.
Locking GPU clocks is like putting your car in manual mode. You control everything. You are responsible for everything. If it crashes, that is on you.
The Clock Breakdown
Here is what we are working with. Base 5090 specs. OC LC edition specs. My actual locked clocks. The numbers are big. The stability is bigger.
| Specification | Base RTX 5090 | OC LC Edition | My Locked Clocks |
|---|---|---|---|
| Base Clock | 2017 MHz | 2017 MHz | 2017 MHz |
| Boost Clock | 2407 MHz | 2610 MHz | 2757 MHz |
| Core Overclock | +0 MHz | +203 MHz | +350 MHz |
| Memory Transfer Rate | 28 Gbps | 28 Gbps | 34 Gbps |
| Memory Overclock | +0 Gbps | +0 Gbps | +6 Gbps |
| VRAM | 32 GB GDDR7 | 32 GB GDDR7 | 32 GB GDDR7 |
| Memory Bus | 512-bit | 512-bit | 512-bit |
| Cooling | Air / AIO | AIO Liquid | AIO Liquid |
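For the curious: here is roughly how I apply the lock on Linux. A minimal sketch using the nvidia-ml-py bindings (pynvml); the clock and power numbers are mine from the table, and you need root. The memory offset I push through a vendor tool, since offset control is not exposed the same way everywhere.

```python
# Sketch: pin the core clock and power limit through NVML (pynvml).
# Numbers match the table above. Needs root and a recent driver.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

# min == max means no boost headroom and no dips: a locked clock.
pynvml.nvmlDeviceSetGpuLockedClocks(gpu, 2757, 2757)

# Lock the power limit too. NVML takes milliwatts.
pynvml.nvmlDeviceSetPowerManagementLimit(gpu, 600_000)

pynvml.nvmlShutdown()
```

The command-line equivalent is `nvidia-smi -lgc 2757,2757` plus `nvidia-smi -pl 600`, and `nvidia-smi -rgc` undoes the clock lock.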
Is It Stable
Yes. One hundred percent stable. I have run training jobs for twelve hours straight. No crashes. No artifacts. No thermal throttling. The liquid cooler keeps temperatures in check. The clocks stay locked. The loss curve goes down. Everything is perfect.
Power And Thermals
My GPU draws a constant 600W. Not sometimes. Not when it feels like it. Always. Under load it is a flat line. No spikes. No dips. The power limit is locked along with the clocks. I know exactly what my electricity bill will look like.
Temperature is 60C. One hundred percent of the time under load. Not 58C. Not 62C. Sixty. The liquid cooling handles the heat so efficiently that the GPU never varies. It is almost suspicious. I keep waiting for it to spike. It never does.
```
Time: 00:00 | Power: 600W | Temp: 60C | Fan: 50%
Time: 03:00 | Power: 600W | Temp: 60C | Fan: 50%
Time: 06:00 | Power: 600W | Temp: 60C | Fan: 50%
Time: 09:00 | Power: 600W | Temp: 60C | Fan: 50%
Time: 12:00 | Power: 600W | Temp: 60C | Fan: 50%
# It is boring. It is beautiful.
```
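That log comes from a small poller I leave running. Something like this sketch, again through pynvml; the three-hour interval and the formatting are just my choices, and fan readings on liquid-cooled cards can be quirky.

```python
# Sketch: poll power, temperature, and fan speed the way the log
# above was produced. Interval and formatting are my choices.
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

start = time.time()
while True:
    hours, minutes = divmod(int((time.time() - start) // 60), 60)
    power_w = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000  # NVML reports mW
    temp_c = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
    fan_pct = pynvml.nvmlDeviceGetFanSpeed(gpu)
    print(f"Time: {hours:02d}:{minutes:02d} | Power: {power_w:.0f}W "
          f"| Temp: {temp_c}C | Fan: {fan_pct}%")
    time.sleep(3 * 3600)  # one sample every three hours
```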
The Quiet Factor
Here is the part that surprised me. It is super quiet. The fans run at 50 percent. The liquid cooling does the heavy lifting. The water absorbs the heat. The radiator dissipates it. The fans just move air gently.
I expected aircraft noise. I got library quiet. I can be in the same room during training. I can take calls. I can think. My previous air-cooled card sounded like it was preparing for takeoff. This one hums.
Liquid cooling is cheating. It is expensive cheating. But it is cheating that lets me overclock further while making less noise. I will take it.
Why Lock Clocks
Boost clocks vary. They go up when the GPU is cool. They go down when power limits hit. This creates inconsistent training times. One run takes four hours. The next takes four hours and twelve minutes. The next takes three hours and forty-five minutes.
When you are debugging training pipelines, consistency matters. If a run is slower, was it the code? Was it the data? Or did the GPU just decide to throttle? You do not know. You cannot know. Not with variable clocks.
Locked clocks remove the variable. The GPU runs at the same speed every time. Training times are predictable. Debugging is easier. You know exactly what performance to expect.
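You can also verify the lock holds mid-run instead of trusting it. A small sketch along the same lines; the target and tolerance values are mine.

```python
# Sketch: confirm the clock lock holds during a run by sampling the
# live graphics clock and flagging any drift.
import time
import pynvml

TARGET_MHZ = 2757  # my locked core clock
TOLERANCE = 15     # MHz of wiggle room before I call it a throttle

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

for _ in range(60):  # sample once a minute for an hour
    clock = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
    if abs(clock - TARGET_MHZ) > TOLERANCE:
        print(f"throttle suspected: {clock} MHz vs {TARGET_MHZ} MHz target")
    time.sleep(60)
```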
The Risk
Overclocking is risky. Pushing too hard causes crashes. Artifacts. Instability. I spent a week finding the right numbers. Too high and training fails. Too low and I leave performance on the table. The sweet spot is narrow.
Memory overclocking is especially finicky. GDDR7 is fast. It is also sensitive. Push it too far and you get corrupted tensors. Silent failures. The training completes but the model is garbage. You do not realize until hours later.
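My crude defense: run the same deterministic computation repeatedly and demand identical results. A sketch assuming PyTorch; standard GEMM kernels are repeatable run-to-run on the same card, so any mismatch points at the hardware, not the math.

```python
# Sketch: crude check for silent memory-overclock corruption. A
# stable card should give bit-identical results for the same matmul.
import torch

def memory_sanity_check(size=8192, rounds=20):
    a = torch.randn(size, size, device="cuda")
    b = torch.randn(size, size, device="cuda")
    reference = a @ b
    for _ in range(rounds):
        if not torch.equal(a @ b, reference):
            return False  # flipped bits somewhere: back the clock off
    return True

if __name__ == "__main__":
    print("stable" if memory_sanity_check() else "corrupted: reduce the overclock")
```

Twenty rounds of an 8192-square matmul is a light smoke test. Before trusting a clock for a twelve-hour job, I run it for much longer.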
Training Performance
With locked clocks my training runs are consistent. A fine-tuning job that used to vary between three and four hours now takes exactly three hours and twenty-two minutes. Every time. I can plan around this. I can queue jobs. I can sleep without wondering if the GPU throttled overnight.
The overclock also means faster iteration. Thirty-five extra epochs per day. More experiments. More failures. More learning. The time savings compound. A week of training becomes five days. A month becomes three weeks.
Is It Worth It
For most people? No. The performance gain is marginal. The risk is real. The time spent tuning is better spent actually training models. I am not recommending everyone do this.
For me? Yes. I run training jobs constantly. I need consistency. I need predictability. I need to know that when I start a job it will finish at the expected time. Locked clocks give me that.
Also I already bought the card. I already bought the cooler. I already spent the money. I might as well squeeze every megahertz out of it. This is not financial advice. This is poor decision making with commitment.
Final Thoughts
My GPU is overclocked. My clocks are locked. My training is stable. My power draw is constant. My temperatures are flat. My fans are quiet. My electricity bill is terrifying. I have no regrets.
If you are going to try this, start small. Test thoroughly. Monitor temperatures. Be prepared for crashes. And accept that you might spend more time tuning than training. That is the hobby.
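If it helps, the start-small loop can even be automated. A rough sketch: step the locked clock up from stock boost and keep the highest setting that survives the stability check. It assumes the check from earlier is saved as `stress.py`, which is a name I just made up.

```python
# Sketch: walk the locked clock up from stock boost, stress at each
# step, and settle on the last step that passes.
import pynvml

from stress import memory_sanity_check  # the earlier check, saved as stress.py (my made-up name)

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

best = None
for clock in range(2407, 2800, 50):  # stock boost upward in 50 MHz steps
    pynvml.nvmlDeviceSetGpuLockedClocks(gpu, clock, clock)
    if memory_sanity_check():  # in real life: a much longer stress run
        best = clock
    else:
        break  # first failure, stop climbing

if best is not None:
    pynvml.nvmlDeviceSetGpuLockedClocks(gpu, best, best)
    print(f"settled at {best} MHz")
pynvml.nvmlShutdown()
```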